12 articles
Learn how AI companies like Anthropic are changing their pricing models for advanced features like OpenClaw, and why this matters for users of AI coding assistants.
Learn how AI companies are changing subscription policies for third-party tools like OpenClaw, and what this means for users and developers.
Anthropic's Claude AI can now directly control Mac and Windows desktops, performing tasks such as navigating applications and managing files autonomously.
A federal judge questioned the Pentagon's motivations for labeling Anthropic, the developer of Claude AI, as a supply-chain risk, suggesting the move may be an attempt to stifle the company's growth.
Anthropic has sued 17 U.S. federal agencies over government pressure to remove AI safety measures from its Claude model, raising questions about oversight and corporate autonomy in national security contexts.
Anthropic has sued the Department of Defense over its designation as a supply chain risk, calling the move "unprecedented and unlawful." The lawsuit raises important questions about government oversight of AI companies and national security concerns.
Anthropic has been officially designated as a supply chain risk to U.S. national security by the Pentagon, prompting CEO Dario Amodei to announce a legal challenge.
The U.S. Department of Defense has labeled AI firm Anthropic a supply-chain risk, banning defense contractors from using its Claude model. Anthropic calls the move unlawful and retaliatory.
The U.S. military is reportedly using Anthropic's Claude AI for AI-driven strike planning in Iran, despite Washington's ban on the company's model.
The State Department has replaced Anthropic's Claude AI with OpenAI's GPT-4.1, citing security and compliance concerns. Several federal agencies are reportedly following suit, signaling a cautious approach to AI adoption in government operations.
Anthropic has refused the Pentagon's demands for unrestricted access to its AI systems, citing concerns over lethal autonomous weapons and mass surveillance. The refusal marks a significant escalation in the debate over military AI development.
Anthropic accuses DeepSeek and other Chinese AI firms of using Claude to train their own AI models without authorization, involving the creation of 24,000 fake accounts and 16 million exchanges.